🐛 Set nofile ulimit for loadbalancer container #7344
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
```diff
@@ -95,6 +97,8 @@ type RunContainerInput struct {
 	PortMappings []PortMapping
 	// IPFamily is the IP version to use.
 	IPFamily clusterv1.ClusterIPFamily
+	// Resource limits and settings for the container.
+	Resources dockercontainer.Resources
```
This could also be exposed at the DockerCluster level for resource limits on the load balancer, but I wanted some feedback before making this a user-facing change.
(force-pushed from f51da8b to bfb3a3c)
/hold Need to be sure this doesn't impact functionality on other systems. It might be safer to set this ulimit at a low-to-middle level, e.g. 8000-20000, rather than a high level, in case some platform setups have upper limits.
65536 is fairly conservative. The upper limit is based on the lowest of: the highest value of an unsigned int (65536), 10% of RAM in KB, and the value of the NR_FILE compilation variable, so 65536 will usually be the lowest number. That said, on Fedora you're likely to want to set a systemd limit for Docker anyway, as you're going to run into other issues. I've just rebuilt my desktop and need to check how I did that. EDIT: I think this is actually related to cgroupsv2, so we will see it more as more things default to cgroupsv2.
I'm not sure - the issue only started impacting me in recent months and seemed to be dependent on Docker / containerd version. Maybe it's linked to a config there.
But I wonder if it even needs to be that high, considering that 65536 open files sounds like a lot for a haproxy in a CAPD cluster (but I have no idea how much it usually uses; is there an easy way to check that?)
@killianmuldoon What about:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
@killianmuldoon: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
I've tested with HAProxy 2.6: I built CAPD with this patch, using the existing HAProxy image, and it works like a charm. Is there anything I can do to help get this merged?
I need time to get back to this 🙃. For now I've been using the workaround of setting the ulimits globally in my Docker config while we figure out the right way to do this; currently I have the following in my systemd unit file.
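The actual unit-file contents weren't captured in this thread; a sketch of the usual systemd approach, assuming the Docker daemon runs as `docker.service` (the drop-in path and the limit value are illustrative, not from this comment):

```ini
# /etc/systemd/system/docker.service.d/nofile.conf
[Service]
LimitNOFILE=1048576
```

After writing the drop-in, `systemctl daemon-reload && systemctl restart docker` applies it, and every container inherits the raised limit.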
BTW - if you're interested in picking this up I'm happy to hand it over!
The last time we discussed this, I was OK with the upper number. Especially on macOS, the worst thing that happens is you have to restart your Docker Engine. And this is all local execution, so there's no remote attack vector.
I described the root cause in kubernetes-sigs/kind#2954 (comment), and fixed it in kubernetes-sigs/kind#3115. In light of that, I don't think we should change the file descriptor limit here unless we have other reasons to do so. I presume the next kind release will have the fix above. For now, I use a workaround similar to what @killianmuldoon described in #7344 (comment).
/close This has been partially fixed by #8246. The current state is that the haproxy image will init and likely crash on startup, but once CAPD is writing the config it will be stable. The final fix will be to pick up a new kindest/haproxy image once they publish one, or to move to haproxytech/haproxy-alpine, either of which includes the maxconn setting in haproxy.cfg. I'll open an issue to track that update.
@killianmuldoon: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hardcode a nofile ulimit when running the load balancer container. I set the limit to quite a high number, but I don't think it's required to be that high for CAPD clusters.
This change is intended to solve docker-library/haproxy#194, which impacts CAPD on Fedora and possibly other Linux distros. In future, the addition of the Resources setting to the run-container config structs could be used to set other kinds of limits, e.g. memory and CPU.